Note: When clicking on a Digital Object Identifier (DOI) number, you will be taken to an external site maintained by the publisher.
Some full text articles may not yet be available without a charge during the embargo (administrative interval).
- Human mobility is becoming increasingly complex in urban environments. However, our fundamental understanding of urban population dynamics, particularly the pulsating fluctuations occurring across different locations and timescales, remains limited. Here, we use mobile device data from large cities and regions worldwide combined with a detrended fractal analysis to uncover a universal spatiotemporal scaling law that governs urban population fluctuations. This law reveals the scale invariance of these fluctuations, spanning from city centers to peripheries over both time and space. Moreover, we show that at any given location, fluctuations obey a time-based scaling law characterized by a spatially decaying exponent, which quantifies their relationship with urban structure. These interconnected discoveries culminate in a robust allometric equation that links population dynamics with urban densities, providing a powerful framework for predicting and managing the complexities of urban human activities. Collectively, this study paves the way for more effective urban planning, transportation strategies, and policies grounded in population dynamics, thereby fostering the development of resilient and sustainable cities. (A minimal detrended fluctuation analysis sketch follows this list.) Free, publicly-accessible full text available December 1, 2026.
- Free, publicly-accessible full text available June 25, 2026.
- Free, publicly-accessible full text available July 9, 2026.
- Free, publicly-accessible full text available June 1, 2026.
- Free, publicly-accessible full text available May 1, 2026.
- Free, publicly-accessible full text available April 1, 2026.
- This study investigates innovative interaction designs for communication and collaborative learning between learners of mixed hearing and signing abilities, leveraging advancements in mixed reality technologies such as Apple Vision Pro and generative AI for animated avatars. Adopting a participatory design approach, we engaged 15 d/Deaf and hard of hearing (DHH) students to brainstorm ideas for an AI avatar with interpreting ability (sign language to English and English to sign language) that would facilitate their face-to-face communication with hearing peers. Participants envisioned the AI avatars addressing some issues with human interpreters, such as lack of availability, and providing affordable alternatives to expensive personalized interpreting services. Our findings indicate a range of preferences for integrating the AI avatars with the actual human figures of both DHH and hearing communication partners. Participants highlighted the importance of having control over customizing the AI avatar, including its AI-generated signs, voices, facial expressions, and their synchronization for enhanced emotional display in communication. Based on our findings, we propose a suite of design recommendations that balance respecting sign language norms with adherence to hearing social norms. Our study offers insights into improving the authenticity of generative AI in scenarios involving specific and sometimes unfamiliar social norms. Free, publicly-accessible full text available May 2, 2026.
- Generative AI tools, particularly those utilizing large language models (LLMs), are increasingly used in everyday contexts. While these tools enhance productivity and accessibility, little is known about how Deaf and Hard of Hearing (DHH) individuals engage with them or the challenges they face when using them. This paper presents a mixed-method study exploring how the DHH community uses Text AI tools like ChatGPT to reduce communication barriers and enhance information access. We surveyed 80 DHH participants and conducted interviews with 9 participants. Our findings reveal important benefits, such as eased communication and bridging Deaf and hearing cultures, alongside challenges like lack of American Sign Language (ASL) support and Deaf cultural understanding. We highlight unique usage patterns, propose inclusive design recommendations, and outline future research directions to improve Text AI accessibility for the DHH community. Free, publicly-accessible full text available April 25, 2026.
- A data assimilation approach is introduced that enables the discovery of forcing functions in Lagrangian, point-particle models from limited measurements of trajectory coordinates. Central to the proposed formulation of this inverse problem is the expression of the forcing function in terms of modal basis functions of the relative velocity between a known carrier flow and the particle solution, weighted with coefficients that are known only within confidence intervals. The probability density function of the random forcing coefficients is inferred using a combination of the forward particle model and its adjoint dynamics, which provides the gradient of the cost function defined as the distance between the measured and predicted particle locations. To ensure convergence of the gradient-based optimization, multiple measurements may be required. If the measurements are noisy, samples of the forcing model within an assumed Gaussian distribution of the measurement confidence interval are computed using a Hamiltonian Monte Carlo method. The method is verified to correctly infer the forcing function of particles traced in the Arnold–Beltrami–Childress flow and in homogeneous isotropic turbulence. The confidence interval of the inferred forcing function with respect to a flow condition improves when the particle is exposed to that flow condition more frequently. The forcing coefficients adapt the model to flow conditions outside the limited range for which point-particle models are typically known only empirically or within confidence intervals. (A simplified code sketch of this trajectory-fitting inverse problem appears as the second example after this list.) Free, publicly-accessible full text available April 1, 2026.
- Accessibility efforts for d/Deaf and hard of hearing (DHH) learners in video-based learning have mainly focused on captions and interpreters, with limited attention to learners' emotional awareness, an important yet challenging skill for effective learning. Current emotion technologies are designed to support learners' emotional awareness and social needs; however, little is known about whether and how DHH learners could benefit from these technologies. Our study explores how DHH learners perceive and use emotion data from two collection approaches, self-reported and automatic emotion recognition (AER), in video-based learning. By comparing the use of these technologies between DHH (N=20) and hearing learners (N=20), we identified key differences in their usage and perceptions: 1) DHH learners enhanced their emotional awareness by rewatching the video to self-report their emotions and called for alternative methods for self-reporting emotion, such as using sign language or expressive emoji designs; and 2) while the AER technology could be useful for detecting emotional patterns in learning experiences, DHH learners expressed more concerns about the accuracy and intrusiveness of the AER data. Our findings provide novel design implications for improving the inclusiveness of emotion technologies to support DHH learners, such as leveraging DHH peer learners' emotions to elicit reflections. Free, publicly-accessible full text available May 2, 2026.
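The first sketch below illustrates, in general terms, the detrended fluctuation analysis (DFA) behind the detrended fractal analysis mentioned in the urban population-fluctuation abstract above. It is a minimal, self-contained example under assumed inputs: the synthetic hourly-count series, the window sizes, and the function name dfa_exponent are illustrative choices, not the authors' code, data, or exact method.

```python
import numpy as np

def dfa_exponent(series, scales):
    """Estimate the DFA scaling exponent of a 1-D time series."""
    profile = np.cumsum(series - np.mean(series))  # integrate the mean-removed signal
    results = []
    for s in scales:
        n_seg = len(profile) // s
        if n_seg < 2:
            continue  # skip window sizes that leave fewer than two segments
        segments = profile[:n_seg * s].reshape(n_seg, s)
        x = np.arange(s)
        sq_res = []
        for seg in segments:
            coeffs = np.polyfit(x, seg, 1)                 # linear detrending per segment
            sq_res.append(np.mean((seg - np.polyval(coeffs, x)) ** 2))
        results.append((s, np.sqrt(np.mean(sq_res))))      # fluctuation function F(s)
    sizes, fluct = zip(*results)
    # Scaling exponent = slope of log F(s) versus log s.
    return np.polyfit(np.log(sizes), np.log(fluct), 1)[0]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    hourly_counts = np.cumsum(rng.normal(size=24 * 365))   # toy "population count" signal
    alpha = dfa_exponent(hourly_counts, scales=[16, 32, 64, 128, 256])
    print(f"Estimated DFA scaling exponent: {alpha:.2f}")
```

The slope of log F(s) against log s is the scaling exponent: values near 0.5 indicate uncorrelated fluctuations, while larger values indicate the long-range, scale-invariant correlations that such an analysis probes.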
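The second sketch gives a heavily simplified view of the trajectory-fitting inverse problem from the data assimilation abstract above: a forcing function expressed in a modal basis of the relative velocity between a carrier flow and the particle is recovered by minimizing the distance between predicted and "measured" particle positions. The adjoint-based gradient and the Hamiltonian Monte Carlo sampling described in the abstract are replaced here by SciPy's generic optimizer with finite-difference gradients, and the 1-D carrier flow, basis functions, and measurement schedule are illustrative assumptions, not the paper's configuration.

```python
import numpy as np
from scipy.optimize import minimize

def carrier_flow(x, t):
    """Toy 1-D carrier velocity field u(x, t), standing in for a known flow."""
    return np.sin(x) * np.cos(t)

def basis(v_rel):
    """Hypothetical modal basis functions of the relative velocity u - v."""
    return np.array([v_rel, v_rel * abs(v_rel), v_rel ** 3])

def simulate(theta, x0, v0, dt, n_steps):
    """Forward point-particle model with forcing sum_k theta_k * phi_k(u - v)."""
    x, v = x0, v0
    path = [x]
    for i in range(n_steps):
        v_rel = carrier_flow(x, i * dt) - v
        v = v + dt * float(theta @ basis(v_rel))   # explicit Euler update of velocity
        x = x + dt * v                             # and of position
        path.append(x)
    return np.array(path)

def cost(theta, x_meas, meas_idx, x0, v0, dt, n_steps):
    """Squared distance between predicted and measured particle positions."""
    path = simulate(theta, x0, v0, dt, n_steps)
    return np.sum((path[meas_idx] - x_meas) ** 2)

if __name__ == "__main__":
    dt, n_steps = 0.01, 500
    meas_idx = np.arange(50, n_steps + 1, 50)       # sparse measurement times
    theta_true = np.array([2.0, -0.5, 0.1])         # "unknown" forcing coefficients
    x_meas = simulate(theta_true, 0.0, 0.0, dt, n_steps)[meas_idx]
    # Recover the coefficients with finite-difference gradients; the paper uses
    # adjoint dynamics for the gradient and HMC for noisy measurements instead.
    result = minimize(cost, x0=np.zeros(3),
                      args=(x_meas, meas_idx, 0.0, 0.0, dt, n_steps))
    print("Recovered forcing coefficients:", result.x)
```

In the setting described by the abstract, the gradient of this cost would come from the adjoint of the particle dynamics, and noisy measurements would be handled by sampling the distribution of the forcing coefficients rather than by a single point estimate.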